Word embeddings (WE) have received much attention recently as a word-to-numeric-vector architecture for text processing and have been a great asset for a wide variety of NLP tasks. Most text processing tasks convert text components such as sentences into numeric matrices before applying their processing algorithms. However, a central problem in all word-vector-based text processing approaches is that sentences differ in length and therefore produce matrices of different dimensions. In this paper, we propose an efficient yet simple statistical method to convert text sentences into normalized matrices of equal dimension. The proposed method combines three of the most effective techniques (averaging-based embeddings, most-likely n-grams, and word mover's distance) to exploit their advantages and reduce their limitations. The fixed size of the resulting matrix does not depend on the language, subject, or scope of the text, nor on the semantic concepts of its words. Our results demonstrate that the normalized matrices capture complementary aspects of most text processing tasks, such as coherence evaluation, text summarization, text classification, automatic essay scoring, and question answering.
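As a rough illustration of the averaging-based component only (the full method described above also incorporates most-likely n-grams and word mover's distance), the sketch below shows how sentences of different lengths can be mapped to vectors of one fixed dimension by averaging pretrained word embeddings. The embedding table and sentences here are hypothetical stand-ins, not the paper's actual data or pipeline.

```python
# Minimal sketch (not the paper's full method): averaging pretrained word
# vectors maps sentences of any length to a single fixed-dimension vector.
# The embedding table below is a hypothetical stand-in for real embeddings.
import numpy as np

EMBED_DIM = 4
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=EMBED_DIM)
              for w in "the cat sat on a mat dogs bark loudly".split()}

def sentence_vector(sentence: str) -> np.ndarray:
    """Average the embeddings of known words; zero vector if none are known."""
    vecs = [embeddings[w] for w in sentence.lower().split() if w in embeddings]
    if not vecs:
        return np.zeros(EMBED_DIM)
    v = np.mean(vecs, axis=0)
    return v / (np.linalg.norm(v) or 1.0)  # L2-normalize for comparability

# Sentences of different lengths yield vectors of identical dimension.
for s in ["the cat sat on a mat", "dogs bark loudly"]:
    print(s, "->", sentence_vector(s).shape)
```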